    The DRIVE-SAFE project: signal processing and advanced information technologies for improving driving prudence and accidents

    In this paper, we describe the DRIVE-SAFE project, whose aim is to create conditions for prudent driving on highways and roadways and thereby reduce accidents caused by driver behavior. To achieve these goals, critical data are being collected from multimodal sensors (cameras, microphones, and other sensors) to build a unique databank on driver behavior. We are developing systems and technologies for analyzing these data and automatically detecting potentially dangerous situations, such as driver fatigue or distraction. Based on the findings of these studies, we will propose systems that warn the driver and take other precautionary measures to avoid accidents once a dangerous situation is detected. To address these issues, a national consortium has been formed that includes the Automotive Research Center (OTAM), Koç University, Istanbul Technical University, Sabancı University, Ford A.Ş., Renault A.Ş., and Fiat A.Ş.

    Experiments on decision fusion for driver recognition

    In this work, we study the individual as well as combined performance of various driving-behavior signals for identifying the driver of a motor vehicle. We investigate a number of classifier fusion techniques for combining decisions from multiple channels. We observe that some driving signals carry more biometric information than others. With trainable combination methods, identification error can be reduced significantly using driving-behavior signals alone. Classifier combination methods thus appear to be very useful for multimodal biometric identification in a car environment.
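The decision fusion idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the channel names, candidate count, posteriors, and weights are all hypothetical, and in practice the combiner weights would be trained on held-out driving data.

```python
import numpy as np

# Hypothetical per-channel posteriors over 4 candidate drivers for one
# test segment, one row per driving-behavior channel (e.g. steering,
# speed, brake pressure). Values are illustrative only.
posteriors = np.array([
    [0.10, 0.60, 0.20, 0.10],   # "steering" channel classifier
    [0.30, 0.30, 0.30, 0.10],   # "speed" channel classifier
    [0.05, 0.70, 0.15, 0.10],   # "brake" channel classifier
])

# Fixed (untrained) sum rule: average the posteriors, pick the argmax.
sum_rule_id = int(np.argmax(posteriors.mean(axis=0)))

# Trainable weighted combiner: channels carrying more biometric
# information get larger weights (weights here are assumed, not learned).
weights = np.array([0.5, 0.1, 0.4])
weighted_id = int(np.argmax(weights @ posteriors))

print(sum_rule_id, weighted_id)  # both pick driver 1 in this toy example
```

The sum rule treats every channel equally; the weighted combiner is the simplest trainable fusion scheme, and more elaborate ones (stacked classifiers, Borda counts) follow the same pattern of mapping per-channel decisions to a single identity.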

    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience, but today's technology prevents such systems from operating satisfactorily under adverse conditions. A proposed framework achieves person recognition by successfully combining different biometric modalities, as borne out in two case studies.

    Applications of Itakura-Saito type spectral distortion measures to image analysis and classification

    24th Asilomar Conference on Signals, Systems and Computers, Part 2 (of 2), 5-7 November 1990, Pacific Grove, CA, USA (IEEE Computer Society; Naval Postgraduate School, Monterey, CA, USA).
    The authors have applied Itakura-Saito (IS) type spectral distortion measures to classifying segments of digitized images using multidimensional multichannel linear prediction (MLPC) theory. Only one previous study has applied the gain-normalized Itakura-Saito distortion measure to the classification of arbitrarily shaped image textures. P. A. Maragos et al. (1983) demonstrated that textures can be classified by two-dimensional linear predictive models under a spectral distortion measure. The authors extend this to the more difficult case of multichannel linear predictive modeling of arbitrarily shaped image textures. All three forms of the Itakura-Saito distortion measure, gain sensitive (GS), gain optimized (GO), and gain normalized (GN), have been extended to the multichannel linear prediction case, and the authors have used them successfully in both the template design and the classification modes.
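For reference, the (scalar, gain-sensitive) Itakura-Saito distortion between two power spectra P and Q is d(P, Q) = mean(P/Q − log(P/Q) − 1), which vanishes only when the spectra coincide and is notably asymmetric. A minimal sketch, assuming simple 1-D spectra (the paper's multichannel extension is not shown):

```python
import numpy as np

def itakura_saito(p, q, eps=1e-12):
    """Gain-sensitive Itakura-Saito distortion between power spectra p, q.

    d(p, q) = mean(r - log(r) - 1) with r = p / q; zero iff p == q.
    """
    r = (p + eps) / (q + eps)
    return float(np.mean(r - np.log(r) - 1.0))

# Toy power spectra (illustrative values only).
p = np.array([1.0, 2.0, 4.0, 2.0])
q = np.array([2.0, 1.0, 1.0, 4.0])

print(itakura_saito(p, p))  # ~0: distortion vanishes for identical spectra
print(itakura_saito(p, q), itakura_saito(q, p))  # asymmetric
```

Template-based classification then reduces to computing this distortion between a test segment's spectrum and each class template and picking the nearest template.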

    Driver emotion profiling from speech

    Humans sense, perceive, and convey emotion differently from one another due to physical, psychological, environmental, cultural, and language differences. For example, as psychologists have recognized and studied for more than a century, it is easier to judge and recognize emotion correctly in someone from the same culture than in someone from a different culture. In this chapter, we study the speech emotion recognition problem using two speech corpora, the Berlin and NAW datasets, recorded by German and American speakers, respectively, and investigate both the universality and the diversity of these two cultural speech datasets. Experiments were conducted to identify three basic emotions, namely angry, sad, and happy, with neutral as the emotionless state. MFCC coefficients were used as the feature set, and an MLP was employed as the classifier to compare performance across the datasets. In addition, speech recorded in real time from drivers was tested to assess performance in a vehicular setting. Finally, a speech emotion profiling approach was introduced to explore the universality and diversity of the speech emotion features.
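The MFCC front end used above follows a standard recipe: windowed power spectrum, triangular mel filterbank, log compression, then a DCT to decorrelate. A minimal single-frame sketch, assuming a 16 kHz sampling rate and illustrative filterbank sizes (not the chapter's exact configuration, and the MLP back end is omitted):

```python
import numpy as np

def mfcc_frame(frame, sr=16000, n_mels=26, n_ceps=13):
    """Minimal MFCCs for one audio frame (illustration, not production code)."""
    # Windowed power spectrum.
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2

    # Triangular filters spaced evenly on the mel scale.
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((len(frame) + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, len(spec)))
    for i in range(n_mels):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for j in range(lo, mid):
            fbank[i, j] = (j - lo) / max(mid - lo, 1)
        for j in range(mid, hi):
            fbank[i, j] = (hi - j) / max(hi - mid, 1)

    # Log filterbank energies, then DCT-II to get cepstral coefficients.
    log_energy = np.log(fbank @ spec + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ log_energy

# Example: one frame of a 1 kHz tone yields a 13-dimensional feature vector.
frame = np.sin(2 * np.pi * 1000.0 * np.arange(512) / 16000.0)
coeffs = mfcc_frame(frame)
print(coeffs.shape)  # (13,)
```

Frame-level vectors like this (often with deltas appended) would then be fed to the MLP classifier to predict the angry/sad/happy/neutral label.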